Search results

1 – 3 of 3
Article
Publication date: 9 September 2014

Rüdiger Rolf, Hannah Reuter, Martin Abel and Kai-Christoph Hamborg

Improving the use of annotations in lecture recordings.

Abstract

Purpose

The purpose of this paper is to improve the use of annotations in lecture recordings.

Design/methodology/approach

A requirements analysis using scenario-based design (SBD) with focus groups.

Findings

These seven points have been extracted from the feedback of the focus groups:

1. Control of the annotation feature (turn on/turn off).
2. An option to decide who is able to see their comments (groups, lecturer, friends).
3. An easy and paper-like experience in creating a comment.
4. An option to discuss comments.
5. An option to import already existing comments.
6. Color-coding of the different types of comments.
7. An option to print their annotations within the context of the recording.
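The seven requirements above can be read as a data model for an annotation service. A minimal sketch in Python, with hypothetical class and field names (this is not the Opencast Matterhorn API):

```python
from dataclasses import dataclass, field
from enum import Enum

class Visibility(Enum):              # point 2: who may see a comment
    PRIVATE = "private"
    GROUP = "group"
    LECTURER = "lecturer"
    FRIENDS = "friends"

class CommentType(Enum):             # point 6: color-coded comment types
    NOTE = "#3498db"
    QUESTION = "#f1c40f"
    CORRECTION = "#e74c3c"

@dataclass
class Annotation:
    recording_id: str
    time_sec: float                  # position in the recording, so a comment
    text: str                        # can be printed in context (point 7)
    author: str
    visibility: Visibility = Visibility.PRIVATE
    kind: CommentType = CommentType.NOTE
    replies: list = field(default_factory=list)  # point 4: discuss comments

    def reply(self, author, text):
        self.replies.append((author, text))

def import_comments(recording_id, rows):
    """Point 5: import already existing comments from plain dicts."""
    return [Annotation(recording_id, r["time"], r["text"], r["author"])
            for r in rows]
```

Points 1 (turning the feature on or off) and 3 (a paper-like editing experience) concern the user interface rather than the data model, so they do not appear in the sketch.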

Research limitations/implications

The study was performed to improve the open-source lecture recording system Opencast Matterhorn.

Originality/value

Annotations can help students who use lecture recordings to move from passive watching to active viewing and reflection.

Details

Interactive Technology and Smart Education, vol. 11 no. 3
Type: Research Article
ISSN: 1741-5659

Article
Publication date: 9 September 2014

Benjamin Wulff, Alexander Fecke, Lisa Rupp and Kai-Christoph Hamborg

Abstract

Purpose

An increasing number of higher education institutions have adopted lecture recording technology in the past decade. Even though some solutions already show a very high degree of automation, active camera control can still only be realized with human labor. Aiming to fill this gap, the LectureSight project is developing a free solution for active, autonomous camera control for presentation recordings. The purpose of this work is to present a prototype of the system and the results of a technical evaluation, as well as a study on possible effects of recordings with active camera control on the learner. The system uses a monocular overview camera to analyze the scene. Adopters can formulate camera control strategies in a simple scripting language to adjust the system’s behavior to the specific characteristics of a presentation site.
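The scriptable control strategies mentioned above can be illustrated with a toy pan rule. This is a hypothetical sketch in Python, not the LectureSight scripting language; presenter and camera positions are assumed to be normalized to [0, 1] across the overview image:

```python
def pan_decision(presenter_x: float, camera_x: float,
                 dead_zone: float = 0.15) -> float:
    """Return a new pan target for the tracking camera.

    The camera moves only when the presenter leaves a dead zone
    around the current framing, which avoids jitter, and then pans
    halfway toward the presenter for a smooth follow motion.
    """
    offset = presenter_x - camera_x
    if abs(offset) <= dead_zone:     # presenter still well framed: hold
        return camera_x
    return camera_x + 0.5 * offset   # partial pan toward the presenter
```

A real strategy would also have to handle detection noise, multiple moving persons, and zoom, which is why a site-specific scripting layer is useful.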

Design/methodology/approach

The system is based on a highly modularized architecture that makes it easy to extend. The prototype was tested in a seminar room and a large lecture hall. Furthermore, a study was conducted in which students from two universities prepared for a simulated exam with an ordinary lecture recording and a recording produced with the LectureSight technology.

Findings

The technical evaluation showed good performance of the prototype but also revealed some technical constraints. The results of the psychological study give evidence that learners might benefit from lecture videos in which the camera follows the presenter, so that gestures and facial expressions are easily perceptible.

Originality/value

The LectureSight project is the first open-source initiative to address camera control for presentation recordings. This opens the way for other projects to build upon the LectureSight architecture. The simulated-exam study gave evidence of a beneficial effect on students’ learning success, which needs to be reproduced. If the effect proves consistent, the mechanism behind it is worth investigating further.

Details

Interactive Technology and Smart Education, vol. 11 no. 3
Type: Research Article
ISSN: 1741-5659

Article
Publication date: 9 September 2014

Markus Ketterl, Christopher Brooks and Florian Schimanke

Details

Interactive Technology and Smart Education, vol. 11 no. 3
Type: Research Article
ISSN: 1741-5659